Hyper log log plus plus(HLL++) #2522

Open
res-life wants to merge 19 commits into branch-25.02 from hll

Conversation


@res-life res-life commented Oct 21, 2024

@res-life res-life requested a review from ttnghia October 21, 2024 12:45
@res-life res-life force-pushed the hll branch 3 times, most recently from b6f5cf5 to 526a61f Compare October 31, 2024 11:34
@res-life res-life changed the title from [Do not review] Hyper log log plus plus(HLL++) to Hyper log log plus plus(HLL++) Oct 31, 2024
rmm::cuda_stream_view stream,
rmm::device_async_resource_ref mr)
{
CUDF_EXPECTS(precision >= 4 && precision <= 18, "HLL++ requires precision in range: [4, 18]");
Collaborator

We can use std::numeric_limits<>::digits instead of hardcoded values 4 and 18.

Collaborator Author

cuCo hardcoded 4, and Spark also hardcoded 4.
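For readability, the hardcoded bounds could also be pulled into named constants; a minimal sketch of that alternative (MIN_PRECISION is a hypothetical name here, MAX_PRECISION mirrors the constant defined elsewhere in this PR):

  // Hypothetical named bounds: 4 is the lower limit used by both cuCo and Spark,
  // 18 is the HLL++ maximum precision used elsewhere in this PR.
  constexpr int MIN_PRECISION = 4;
  constexpr int MAX_PRECISION = 18;

  CUDF_EXPECTS(precision >= MIN_PRECISION && precision <= MAX_PRECISION,
               "HLL++ requires precision in range: [4, 18]");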

auto input_cols = std::vector<int64_t const*>(input_iter, input_iter + input.num_children());
auto d_inputs = cudf::detail::make_device_uvector_async(input_cols, stream, mr);
auto result = cudf::make_numeric_column(
cudf::data_type{cudf::type_id::INT64}, input.size(), cudf::mask_state::ALL_VALID, stream);
Collaborator

Do we need such an all-valid null mask? How about cudf::mask_state::UNALLOCATED?

Collaborator Author

Tested Spark behavior: approx_count_distinct(null) returns 0, so the values in the result column are always non-null.

Collaborator

I meant, if all rows are valid, we don't need to allocate a null mask.
BTW, we need to pass mr to the returning column (but do not pass it to the intermediate vector/column).

Suggested change
cudf::data_type{cudf::type_id::INT64}, input.size(), cudf::mask_state::ALL_VALID, stream);
cudf::data_type{cudf::type_id::INT64}, input.size(), cudf::mask_state::UNALLOCATED, stream, mr);

Collaborator Author

Done.

auto result = cudf::make_numeric_column(
cudf::data_type{cudf::type_id::INT64}, input.size(), cudf::mask_state::ALL_VALID, stream);
// evaluate from struct<long, ..., long>
thrust::for_each_n(rmm::exec_policy(stream),
Collaborator

Try to use exec_policy_nosync as much as possible.

Suggested change
thrust::for_each_n(rmm::exec_policy(stream),
thrust::for_each_n(rmm::exec_policy_nosync(stream),

Collaborator Author

Done.

Comment on lines 34 to 36
* The input sketch values must be given in the format `LIST<INT8>`.
*
* @param input The sketch column which contains `LIST<INT8>` values.
Collaborator

INT8 or INT64?

Collaborator
@ttnghia ttnghia Nov 1, 2024

In addition, in estimate_from_hll_sketches I see that the input is STRUCT<LONG, LONG, ....> instead of LIST<>. Why?

Collaborator Author

It's STRUCT<LONG, LONG, ....>, consistent with Spark. The input is columnar data, e.g.: sketch 0 is composed of all the data of the children at index 0.
Updated the function comments; refer to commit
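To make the columnar layout concrete, here is a small device-side sketch (illustrative only, reusing the sketches span of child-column pointers from estimate_fn above): the i-th packed-register long of the sketch for output row `row` is stored at row `row` of child column i.

  // Illustrative helper, not part of the PR: read one packed-register long of a sketch.
  // `sketches` holds one device pointer per long child of the STRUCT<LONG, LONG, ...> column.
  __device__ int64_t sketch_long(cudf::device_span<int64_t const*> sketches,
                                 cudf::size_type row,  // which sketch (= which output row)
                                 int long_idx)         // which long column of that sketch
  {
    return sketches[long_idx][row];
  }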

@res-life
Collaborator Author

Ready to review except test cases.

@@ -196,6 +196,7 @@ add_library(
src/HashJni.cpp
src/HistogramJni.cpp
src/HostTableJni.cpp
src/HLLPPJni.cpp
Collaborator

Let's try to be generic.

Suggested change
src/HLLPPJni.cpp
src/AggregationJni.cpp

Collaborator Author

Renamed to HLLPPHostUDFJni; AggregationJni is too generic.

@@ -204,6 +205,7 @@ add_library(
src/SparkResourceAdaptorJni.cpp
src/SubStringIndexJni.cpp
src/ZOrderJni.cpp
src/HLLPP.cu
Collaborator
@ttnghia ttnghia Dec 13, 2024

How about HyperLogLogPP?

Suggested change
src/HLLPP.cu
src/HyperLogLogPP.cu

This name also applies to the .hpp and *.java files.

Collaborator Author

Done.

@@ -0,0 +1,102 @@
/*
* Copyright (c) 2023-2024, NVIDIA CORPORATION.
Collaborator

Suggested change
* Copyright (c) 2023-2024, NVIDIA CORPORATION.
* Copyright (c) 2024-2025, NVIDIA CORPORATION.

Collaborator Author

Done.

Comment on lines 50 to 51
int64_t shift_mask = MASK << (REGISTER_VALUE_BITS * reg_idx);
int64_t v = (long_10_registers & shift_mask) >> (REGISTER_VALUE_BITS * reg_idx);
Collaborator
@ttnghia ttnghia Dec 13, 2024

Suggested change
int64_t shift_mask = MASK << (REGISTER_VALUE_BITS * reg_idx);
int64_t v = (long_10_registers & shift_mask) >> (REGISTER_VALUE_BITS * reg_idx);
auto const shift_bits = REGISTER_VALUE_BITS * reg_idx;
auto const shift_mask = MASK << shift_bits;
auto const v = (long_10_registers & shift_mask) >> shift_bits;

Collaborator Author

Done.

}

struct estimate_fn {
cudf::device_span<int64_t const*> sketch_longs;
Collaborator
@ttnghia ttnghia Dec 13, 2024

Suggested change
cudf::device_span<int64_t const*> sketch_longs;
cudf::device_span<int64_t const*> sketches;

Collaborator Author

done

Comment on lines 57 to 58
int const precision;
int64_t* const out;
Collaborator
@ttnghia ttnghia Dec 13, 2024

We now favor non-const members so the functor can be moved by the compiler if needed.
In addition, member variables need to be sorted by their sizes to reduce padding.

Suggested change
int const precision;
int64_t* const out;
int64_t* out;
int precision;
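A general illustration of the padding point (not code from this PR): with three members on a typical 64-bit ABI, ordering by size removes interior padding.

  // Illustrative only; sizes assume 8-byte pointers and 4-byte ints.
  struct unsorted { int a; int64_t* p; int b; };  // 4 + 4 (pad) + 8 + 4 + 4 (pad) = 24 bytes
  struct sorted   { int64_t* p; int a; int b; };  // 8 + 4 + 4                     = 16 bytes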

Collaborator Author

done


__device__ void operator()(cudf::size_type const idx) const
{
auto const num_regs = 1ull << precision;
Collaborator

This seems to be used to compare with signed int later, thus it should not be unsigned here.

Suggested change
auto const num_regs = 1ull << precision;
auto const num_regs = 1 << precision;
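A brief, generic illustration of why the unsigned literal matters (not code from this PR): when a signed loop index is compared against an unsigned count, the signed side is converted to unsigned, which draws -Wsign-compare warnings and misbehaves for negative values.

  // Illustrative only.
  auto const num_regs_unsigned = 1ull << 9;  // unsigned long long
  auto const num_regs_signed   = 1 << 9;     // int
  for (int i = 0; i < num_regs_signed; ++i) { /* fine: both operands are signed */ }
  // for (int i = 0; i < num_regs_unsigned; ++i) {}  // -Wsign-compare: i converted to unsigned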

Collaborator Author

done

rmm::cuda_stream_view stream,
rmm::device_async_resource_ref mr)
{
CUDF_EXPECTS(precision >= 4, "HyperLogLogPlusPlus requires precision is bigger than 4.");
Collaborator

Suggested change
CUDF_EXPECTS(precision >= 4, "HyperLogLogPlusPlus requires precision is bigger than 4.");
CUDF_EXPECTS(precision >= 4, "HyperLogLogPlusPlus requires precision bigger than 4.");

Collaborator Author

done

Comment on lines 88 to 89
auto const input_iter = cudf::detail::make_counting_transform_iterator(
0, [&](int i) { return input.child(i).begin<int64_t>(); });
Collaborator
@ttnghia ttnghia Dec 13, 2024

We need a CUDF_EXPECTS to check for input type too (struct of longs).

Collaborator Author

done.
Now all the outer functions check:

  CUDF_EXPECTS(input.type().id() == cudf::type_id::STRUCT,
               "HyperLogLogPlusPlus buffer type must be a STRUCT of long columns.");
  for (auto i = 0; i < input.num_children(); i++) {
    CUDF_EXPECTS(input.child(i).type().id() == cudf::type_id::INT64,
                 "HyperLogLogPlusPlus buffer type must be a STRUCT of long columns.");
  }

@res-life
Collaborator Author

build

@res-life
Collaborator Author

Verified Host UDF successfully via NVIDIA/spark-rapids#11638

@res-life res-life marked this pull request as ready for review December 17, 2024 13:20
@ttnghia
Collaborator

ttnghia commented Dec 18, 2024

Need to wait for the dependencies to be merged first before we can build.

int64_t const precision, // num of bits for register addressing, e.g.: 9
int* const registers_output_cache, // num is num_groups * num_registers_per_sketch
int* const registers_thread_cache, // num is num_threads * num_registers_per_sketch
cudf::size_type* const group_lables_thread_cache // save the group lables for each thread
Collaborator

nit: labels?

Collaborator Author

done

* sketch. Input is a struct column with multiple long columns which is
* consistent with Spark. Output is a struct scalar with multiple long values.
*/
Reduction_MERGE(1),
Collaborator

Naming convention should be consistent with GroupByMerge.

Suggested change
Reduction_MERGE(1),
ReductionMerge(1),

Collaborator Author

done

/**
* HyperLogLogPlusPlus(HLLPP) host UDF aggregation utils
*/
public class HyperLogLogPlusPlusHostUDF {
Comment on lines 34 to 38
switch (agg_type) {
case 0: return spark_rapids_jni::create_hllpp_reduction_host_udf(precision);
case 1: return spark_rapids_jni::create_hllpp_reduction_merge_host_udf(precision);
case 2: return spark_rapids_jni::create_hllpp_groupby_host_udf(precision);
default: return spark_rapids_jni::create_hllpp_groupby_merge_host_udf(precision);
Collaborator

Suggested change
switch (agg_type) {
case 0: return spark_rapids_jni::create_hllpp_reduction_host_udf(precision);
case 1: return spark_rapids_jni::create_hllpp_reduction_merge_host_udf(precision);
case 2: return spark_rapids_jni::create_hllpp_groupby_host_udf(precision);
default: return spark_rapids_jni::create_hllpp_groupby_merge_host_udf(precision);
switch (agg_type) {
case 0: return spark_rapids_jni::create_hllpp_reduction_host_udf(precision);
case 1: return spark_rapids_jni::create_hllpp_reduction_merge_host_udf(precision);
case 2: return spark_rapids_jni::create_hllpp_groupby_host_udf(precision);
case 3: return spark_rapids_jni::create_hllpp_groupby_merge_host_udf(precision);
default: CUDF_FAIL("Invalid aggregation type.");

Collaborator Author

done

Comment on lines 27 to 47
/**
* The number of bits that is required for a HLLPP register value.
*
* This number is determined by the maximum number of leading binary zeros a
* hashcode can produce. This is equal to the number of bits the hashcode
* returns. The current implementation uses a 64-bit hashcode, this means 6-bits
* are (at most) needed to store the number of leading zeros.
*/
constexpr int REGISTER_VALUE_BITS = 6;

// MASK binary 6 bits: 111-111
constexpr uint64_t MASK = (1L << REGISTER_VALUE_BITS) - 1L;

// This value is 10, one long stores 10 register values
constexpr int REGISTERS_PER_LONG = 64 / REGISTER_VALUE_BITS;

// XXHash seed, consistent with Spark
constexpr int64_t SEED = 42L;

// max precision; if a precision bigger than 18 is required, use 18.
constexpr int MAX_PRECISION = 18;
Collaborator

Do these values need to be exposed to the public? Otherwise, please move them to the source file.

Collaborator Author

done, moved to source file.
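These constants fix the sketch width; a quick worked example using the num_long_cols formula that appears later in this PR (values are illustrative):

  // Illustrative arithmetic only, mirroring the formulas used in the PR.
  constexpr int precision     = 9;                                      // example value
  constexpr int num_registers = 1 << precision;                         // 512
  constexpr int num_long_cols = num_registers / REGISTERS_PER_LONG + 1; // 512 / 10 + 1 = 52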

Comment on lines 42 to 45
template <typename cudf_aggregation>
struct hllpp_udf : cudf::host_udf_base {
static_assert(std::is_same_v<cudf_aggregation, cudf::reduce_aggregation> ||
std::is_same_v<cudf_aggregation, cudf::groupby_aggregation>);
Collaborator
@ttnghia ttnghia Jan 4, 2025

Suggested change
template <typename cudf_aggregation>
struct hllpp_udf : cudf::host_udf_base {
static_assert(std::is_same_v<cudf_aggregation, cudf::reduce_aggregation> ||
std::is_same_v<cudf_aggregation, cudf::groupby_aggregation>);
struct hllpp_udf : cudf::groupby_host_udf, cudf::reduce_host_udf {

Collaborator Author

done.

Comment on lines +60 to +64
[[nodiscard]] output_type operator()(host_udf_input const& udf_input,
rmm::cuda_stream_view stream,
rmm::device_async_resource_ref mr) const override
{
if constexpr (std::is_same_v<cudf_aggregation, cudf::reduce_aggregation>) {
Collaborator
@ttnghia ttnghia Jan 4, 2025

With the new interfaces, this needs to be separated into two separate operator() functions for reduction/groupby.
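A rough sketch of the requested split (assuming the reworked host UDF interface from rapidsai/cudf#17645, where the reduction input is passed as a function parameter and the groupby data comes from base-class accessors; the exact parameter lists are an assumption, and reduce_hllpp/group_hllpp are hypothetical helper names, not functions from this PR):

  // Sketch only: one override per base class instead of a single templated operator().
  struct hllpp_udf : cudf::groupby_host_udf, cudf::reduce_host_udf {
    // Reduction path: the input values column is now a function parameter.
    [[nodiscard]] std::unique_ptr<cudf::scalar> operator()(
      cudf::column_view const& input,
      cudf::data_type output_dtype,
      std::optional<std::reference_wrapper<cudf::scalar const>> init,
      rmm::cuda_stream_view stream,
      rmm::device_async_resource_ref mr) const override
    {
      return reduce_hllpp(input, precision, stream, mr);  // hypothetical helper
    }

    // Groupby path: grouped values, offsets, and labels come from base-class accessors.
    [[nodiscard]] std::unique_ptr<cudf::column> operator()(
      rmm::cuda_stream_view stream, rmm::device_async_resource_ref mr) const override
    {
      return group_hllpp(get_grouped_values(), get_group_offsets(), get_group_labels(),
                         precision, stream, mr);  // hypothetical helper
    }

    int precision;
  };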

Comment on lines 101 to 131
[[nodiscard]] output_type get_empty_output(
[[maybe_unused]] std::optional<cudf::data_type> output_dtype,
rmm::cuda_stream_view stream,
rmm::device_async_resource_ref mr) const override
{
int num_registers = 1 << precision;
int num_long_cols = num_registers / REGISTERS_PER_LONG + 1;
auto const results_iter = cudf::detail::make_counting_transform_iterator(
0, [&](int i) { return cudf::make_empty_column(cudf::data_type{cudf::type_id::INT64}); });
auto children =
std::vector<std::unique_ptr<cudf::column>>(results_iter, results_iter + num_long_cols);

if constexpr (std::is_same_v<cudf_aggregation, cudf::reduce_aggregation>) {
// reduce
auto host_results_view_iter = thrust::make_transform_iterator(
children.begin(), [](auto const& results_column) { return results_column->view(); });
auto views = std::vector<cudf::column_view>(host_results_view_iter,
host_results_view_iter + num_long_cols);
auto table_view = cudf::table_view{views};
auto table = cudf::table(table_view);
return std::make_unique<cudf::struct_scalar>(std::move(table), true, stream, mr);
} else {
// groupby
return cudf::make_structs_column(0,
std::move(children),
0, // null count
rmm::device_buffer{}, // null mask
stream,
mr);
}
}
Collaborator
@ttnghia ttnghia Jan 4, 2025

Suggested change
[[nodiscard]] output_type get_empty_output(
[[maybe_unused]] std::optional<cudf::data_type> output_dtype,
rmm::cuda_stream_view stream,
rmm::device_async_resource_ref mr) const override
{
int num_registers = 1 << precision;
int num_long_cols = num_registers / REGISTERS_PER_LONG + 1;
auto const results_iter = cudf::detail::make_counting_transform_iterator(
0, [&](int i) { return cudf::make_empty_column(cudf::data_type{cudf::type_id::INT64}); });
auto children =
std::vector<std::unique_ptr<cudf::column>>(results_iter, results_iter + num_long_cols);
if constexpr (std::is_same_v<cudf_aggregation, cudf::reduce_aggregation>) {
// reduce
auto host_results_view_iter = thrust::make_transform_iterator(
children.begin(), [](auto const& results_column) { return results_column->view(); });
auto views = std::vector<cudf::column_view>(host_results_view_iter,
host_results_view_iter + num_long_cols);
auto table_view = cudf::table_view{views};
auto table = cudf::table(table_view);
return std::make_unique<cudf::struct_scalar>(std::move(table), true, stream, mr);
} else {
// groupby
return cudf::make_structs_column(0,
std::move(children),
0, // null count
rmm::device_buffer{}, // null mask
stream,
mr);
}
}
[[nodiscard]] std::unique_ptr<cudf::column> get_empty_output(
rmm::cuda_stream_view stream,
rmm::device_async_resource_ref mr) const override
{
int num_registers = 1 << precision;
int num_long_cols = num_registers / REGISTERS_PER_LONG + 1;
auto const results_iter = cudf::detail::make_counting_transform_iterator(
0, [&](int i) { return cudf::make_empty_column(cudf::data_type{cudf::type_id::INT64}); });
auto children =
std::vector<std::unique_ptr<cudf::column>>(results_iter, results_iter + num_long_cols);
return cudf::make_structs_column(0,
std::move(children),
0, // null count
rmm::device_buffer{}, // null mask
stream,
mr);
}

@ttnghia
Collaborator

ttnghia commented Jan 4, 2025

The interface for HOST_UDF has been changed in libcudf. Please update the code accordingly.
Ref: rapidsai/cudf#17645

Comment on lines +49 to +58
[[nodiscard]] input_data_attributes get_required_data() const override
{
if constexpr (std::is_same_v<cudf_aggregation, cudf::reduce_aggregation>) {
return {reduction_data_attribute::INPUT_VALUES};
} else {
return {groupby_data_attribute::GROUPED_VALUES,
groupby_data_attribute::GROUP_OFFSETS,
groupby_data_attribute::GROUP_LABELS};
}
}
Collaborator
@ttnghia ttnghia Jan 11, 2025

The new interface does not have this function anymore.

Suggested change
[[nodiscard]] input_data_attributes get_required_data() const override
{
if constexpr (std::is_same_v<cudf_aggregation, cudf::reduce_aggregation>) {
return {reduction_data_attribute::INPUT_VALUES};
} else {
return {groupby_data_attribute::GROUPED_VALUES,
groupby_data_attribute::GROUP_OFFSETS,
groupby_data_attribute::GROUP_LABELS};
}
}

Comment on lines +66 to +67
auto const& input_values =
std::get<cudf::column_view>(udf_input.at(reduction_data_attribute::INPUT_VALUES));
Collaborator

With the new interface, the input column is passed as a function parameter.

Suggested change
auto const& input_values =
std::get<cudf::column_view>(udf_input.at(reduction_data_attribute::INPUT_VALUES));

Comment on lines +79 to +80
auto const& group_values =
std::get<cudf::column_view>(udf_input.at(groupby_data_attribute::GROUPED_VALUES));
Collaborator

Suggested change
auto const& group_values =
std::get<cudf::column_view>(udf_input.at(groupby_data_attribute::GROUPED_VALUES));
auto const group_values = get_grouped_values();

Comment on lines +82 to +83
auto const group_offsets = std::get<cudf::device_span<cudf::size_type const>>(
udf_input.at(groupby_data_attribute::GROUP_OFFSETS));
Collaborator

Suggested change
auto const group_offsets = std::get<cudf::device_span<cudf::size_type const>>(
udf_input.at(groupby_data_attribute::GROUP_OFFSETS));
auto const group_offsets = get_group_offsets();

Comment on lines +85 to +86
auto const group_labels = std::get<cudf::device_span<cudf::size_type const>>(
udf_input.at(groupby_data_attribute::GROUP_LABELS));
Collaborator

Suggested change
auto const group_labels = std::get<cudf::device_span<cudf::size_type const>>(
udf_input.at(groupby_data_attribute::GROUP_LABELS));
auto const group_labels = get_group_labels();

}

/**
* @brief create an empty struct scalar
Collaborator
@ttnghia ttnghia Jan 11, 2025

Suggested change
* @brief create an empty struct scalar
* @brief Create an empty column when the input is empty.

Collaborator
@revans2 revans2 left a comment

I didn't read through all of the code; I don't know the C++ well enough to feel like it would be good for me to review it. But we do need some kind of test added here to at least show that the code is minimally working. We should not rely only on spark-rapids to verify the code.

}();
CUDF_EXPECTS(udf_ptr != nullptr, "Invalid HyperLogLogPlusPlus(HLLPP) UDF instance.");

return reinterpret_cast<jlong>(udf_ptr.release());
Collaborator

Where is this pointer released? It looks like a memory leak.

/**
* Create a HyperLogLogPlusPlus(HLLPP) host UDF
*/
public static long createHLLPPHostUDF(AggregationType type, int precision) {
Collaborator

I think that this API leaks. We might want to think about how to redesign this and the Java APIs to be more robust.

At a minimum we need an API that will let us free the HostUDF aggregation when we are done with it, but I would really prefer a class that acts as a wrapper around the long and is AutoCloseable, so we can use the withResource API in spark-rapids.

Collaborator

Oh I forgot about closing it. Yes, we can auto-close the wrapper class https://github.com/rapidsai/cudf/blob/bbf4f7824c23c0c482f52bafdf1ece1213da2f65/java/src/main/java/ai/rapids/cudf/HostUDFWrapper.java#L28.
I'll add a fix for it shortly.

Collaborator

Note that this class must extend HostUDFWrapper and override the required methods.
